Article search results: 2,985 records in total (2,896 subscription full text, 49 free, 40 domestically free). By subject: Mathematics 2,524; Physics 281; Chemistry 95; Mechanics 55; interdisciplinary 30. The records span publication years 1971 to 2023.
1.
Computing the centroid (also called type reduction) of an interval type-2 fuzzy set is an important module of interval type-2 fuzzy logic systems. The Karnik-Mendel (KM) iterative algorithm is usually regarded as the standard algorithm for computing this centroid. However, the KM algorithm involves a complex computational procedure, which is unfavorable for real-time applications. Among the various improved algorithms, the non-iterative Nie-Tan (NT) algorithm saves computational cost. Moreover, the continuous version of the NT algorithm (CNT) has been proved to compute the centroid exactly. This paper compares the summation operation of the discrete NT algorithm with the integration operation of the CNT algorithm; four computer simulation examples confirm that, when the number of samples of the primary variable of the interval type-2 fuzzy set is increased moderately, the results of the NT algorithm approximate those of the CNT algorithm very accurately.
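For orientation, a minimal Python sketch of the discrete Nie-Tan computation described above: the closed-form centroid is the average of the primary variable weighted by the sum of the lower and upper membership grades. The Gaussian membership functions and the function name nt_centroid are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def nt_centroid(x, lower_mf, upper_mf):
    """Nie-Tan (NT) closed-form centroid of an interval type-2 fuzzy set.

    x        : sampled values of the primary variable
    lower_mf : lower membership grades at x
    upper_mf : upper membership grades at x
    """
    x = np.asarray(x, dtype=float)
    mu = np.asarray(lower_mf, dtype=float) + np.asarray(upper_mf, dtype=float)
    return np.sum(x * mu) / np.sum(mu)

# Illustrative set: Gaussian primary membership with an uncertain footprint
x = np.linspace(0.0, 10.0, 201)                        # moderately dense sampling
upper = np.exp(-0.5 * ((x - 5.0) / 2.0) ** 2)          # upper membership function
lower = 0.6 * np.exp(-0.5 * ((x - 5.0) / 1.0) ** 2)    # lower membership function
print(nt_centroid(x, lower, upper))                    # close to 5.0 for this symmetric set
```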
2.
It is shown that the Pauli equation and the concept of spin naturally emerge from logical inference applied to experiments on a charged particle under the conditions that (i) space is homogeneous, (ii) the observed events are logically independent, and (iii) the observed frequency distributions are robust with respect to small changes in the conditions under which the experiment is carried out. The derivation does not have recourse to concepts of quantum theory and is based on the same principles which have already been shown to lead to, e.g., the Schrödinger equation and the probability distributions of pairs of particles in the singlet or triplet state. Application to Stern–Gerlach experiments with chargeless, magnetic particles provides additional support for the thesis that quantum theory follows from logical inference applied to a well-defined class of experiments.
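For reference (the standard textbook form, not quoted from the paper), the Pauli equation for a particle of mass m and charge q in electromagnetic potentials A and φ reads

\[
i\hbar\,\frac{\partial\psi}{\partial t}
=\left[\frac{1}{2m}\bigl(\boldsymbol{\sigma}\cdot(\mathbf{p}-q\mathbf{A})\bigr)^{2}+q\phi\right]\psi ,
\]

where ψ is a two-component spinor and σ denotes the vector of Pauli matrices; it is this equation, together with the two-component (spin) structure of ψ, that the logical-inference argument is claimed to recover.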
3.
The (i) reciprocity relations for the relative Fisher information (RFI, hereafter) and (ii) a generalized RFI–Euler theorem are self-consistently derived from the Hellmann–Feynman theorem. These new reciprocity relations generalize the RFI–Euler theorem and constitute the basis for building up a mathematical Legendre transform structure (LTS, hereafter), akin to that of thermodynamics, that underlies the RFI scenario. This demonstrates the possibility of translating the entire mathematical structure of thermodynamics into an RFI-based theoretical framework. Virial theorems play a prominent role in this endeavor, as a Schrödinger-like equation can be associated with the RFI. Lagrange multipliers are determined by invoking the RFI–LTS link and the quantum mechanical virial theorem. An appropriate ansatz allows for the inference of probability density functions (pdf's, hereafter) and energy eigenvalues of the above-mentioned Schrödinger-like equation. The energy eigenvalues obtained here via inference are benchmarked against established theoretical and numerical results. A principled theoretical basis to reconstruct the RFI framework from the FIM framework is established. Numerical examples for exemplary cases are provided.
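For context (a definition standard in this literature, not reproduced from the paper), the relative Fisher information of a probability density f with respect to a reference density g on the real line is

\[
I[f\,\Vert\,g]\;=\;\int f(x)\left[\frac{d}{dx}\,\ln\frac{f(x)}{g(x)}\right]^{2}dx ,
\]

which reduces to the ordinary (translational) Fisher information measure, the FIM of the abstract, when g is constant.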
4.
After a model has been selected by a variable selection method, how to assess the significance of the coefficients of the selected variables is one of the frontier problems of statistics. Starting from the selection result of the adaptive Lasso, and taking into account the diversity of error distributions encountered in practice, this paper constructs a conditional test statistic for the coefficients of the variables retained in the model based on the selection event, and proves the uniform convergence of this statistic. Simulation studies show that the proposed method further refines the variable selection results under a variety of error distributions and thus has considerable practical value. The method is applied to an empirical analysis of the CEPS student data, where ten variables, including students' cognitive ability, are selected as the main factors affecting middle school students' academic performance, providing a useful reference for related research.
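As a point of reference, here is a minimal Python sketch of the adaptive Lasso selection step that the paper starts from (the conditional post-selection test itself is not reproduced); the pilot OLS weighting, the tuning choice, and the simulated data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV

def adaptive_lasso_select(X, y, gamma=1.0):
    """Adaptive Lasso via the usual reweighting trick: rescale each column by a
    pilot-estimate weight, run an ordinary Lasso, then map the coefficients back."""
    pilot = LinearRegression().fit(X, y).coef_      # pilot (OLS) estimates
    w = np.abs(pilot) ** gamma + 1e-8               # adaptive weights
    lasso = LassoCV(cv=5).fit(X * w, y)             # Lasso on the rescaled design
    coef = lasso.coef_ * w                          # back-transform to original scale
    return np.flatnonzero(coef), coef               # selected indices, coefficients

# Toy example: 3 informative predictors out of 10 (simulated data)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.8 * X[:, 7] + rng.normal(size=200)
selected, coef = adaptive_lasso_select(X, y)
print(selected)
```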
5.
In this paper we investigate the stability of fractional-order fuzzy cellular neural networks with leakage delay and time-varying delays. Based on Lyapunov theory and bounding techniques from fractional calculus, sufficient criteria are established to guarantee stability. Hybrid feedback control is applied to derive the proposed results. Finally, numerical examples with simulation results are given to illustrate the effectiveness of the proposed method.
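Not stated in the abstract, but useful for orientation: fractional-order network models of this kind are usually written with the Caputo derivative of order 0 < α < 1,

\[
{}^{C}\!D^{\alpha}f(t)\;=\;\frac{1}{\Gamma(1-\alpha)}\int_{0}^{t}\frac{f'(\tau)}{(t-\tau)^{\alpha}}\,d\tau ,
\]

which replaces the first-order time derivative in the integer-order cellular neural network equations.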
6.
In this paper we present a new approach, based on the Nearest Interval Approximation Operator, for dealing with a multiobjective programming problem with fuzzy-valued objective functions.
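For context, the nearest interval approximation operator standard in this literature (an assumption here, since the abstract does not spell it out) maps a fuzzy number à with α-cuts [A_L(α), A_U(α)] to the interval

\[
C_d(\tilde A)\;=\;\left[\int_{0}^{1}A_{L}(\alpha)\,d\alpha,\;\int_{0}^{1}A_{U}(\alpha)\,d\alpha\right],
\]

the interval closest to à under a suitable distance between fuzzy numbers; replacing each fuzzy-valued objective by such an interval turns the problem into an interval-valued multiobjective program.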
7.
The Analytic Hierarchy Process (AHP) is a measurement methodology based on pair-wise comparisons that relies on judgment to derive priority scales. During its implementation, one constructs hierarchies, then makes judgments or performs measurements on pairs of elements with respect to a criterion to derive preference scales, which are then synthesized throughout the structure to select the preferred alternative.

One of the areas where the AHP finds application is in the subjective phases of risk assessment (RA), where it is used to structure and prioritize diverse risk factors, including the judgments of experts. Since fuzzy logic (FL) has been shown to be an effective tool for accommodating human experts and their communication of linguistic variables, there has been research aimed at modeling the fuzziness in the AHP (FAHP), and recently the focus of some of that modeling has been with respect to RA.

The literature discusses more than one FAHP model, which raises the question as to which are the prominent models and what their characteristics are. In response to this question, we examine three of the most influential FAHP models. The article proceeds as follows. It begins with a brief overview of the AHP and its limitations when confronted with a fuzzy environment. This is followed by a discussion of FL modifications of the AHP. An RA-based likelihood score example is used throughout. The article ends with a commentary on the findings.
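A minimal Python sketch of the crisp AHP prioritization step described above (the principal-eigenvector method with a consistency index); the 3×3 judgment matrix is a made-up illustration, and the fuzzy extensions surveyed in the article are not shown.

```python
import numpy as np

def ahp_priorities(A):
    """Priority weights from a pairwise comparison matrix A via the
    principal right eigenvector, plus Saaty's consistency index."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                 # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                # normalize to a priority scale
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)        # consistency index, 0 if perfectly consistent
    return w, ci

# Illustrative 3x3 judgment matrix for three risk factors (made-up values)
A = np.array([[1.0,   3.0, 5.0],
              [1/3.,  1.0, 2.0],
              [1/5., 1/2., 1.0]])
w, ci = ahp_priorities(A)
print(w, ci)   # priorities sum to 1; a small CI indicates consistent judgments
```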
8.
This paper extends the framework for the valuation of life insurance policies and annuities by Andrés-Sánchez and González-Vila (2012, 2014) in two ways. First, we allow various uncertain magnitudes to be estimated by means of fuzzy numbers; this applies not only to interest rates but also to the amounts to be paid out by the insurance company. Second, the use of symmetrical triangular fuzzy numbers allows us to obtain expressions for the pricing of life contingencies and their variability that are closely linked to standard financial and actuarial mathematics. Moreover, they are relatively straightforward to compute and understand from a standard actuarial point of view.
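A small Python sketch, under stated assumptions, of the kind of computation the symmetric-triangular setting makes easy: the α-cuts of the present value of one monetary unit when the discount rate is a symmetric triangular fuzzy number. The function name and parameter values are illustrative, not from the paper.

```python
import numpy as np

def fuzzy_present_value(center, spread, t, alphas=np.linspace(0.0, 1.0, 5)):
    """alpha-cuts of the present value of 1 unit payable in t years when the
    discount rate is a symmetric triangular fuzzy number (center, spread)."""
    lo_rate = center - (1 - alphas) * spread    # lower end of the rate alpha-cut
    hi_rate = center + (1 - alphas) * spread    # upper end of the rate alpha-cut
    # PV is decreasing in the rate, so the interval bounds swap
    pv_lower = (1 + hi_rate) ** (-t)
    pv_upper = (1 + lo_rate) ** (-t)
    return list(zip(alphas, pv_lower, pv_upper))

for a, lo, hi in fuzzy_present_value(center=0.03, spread=0.01, t=10):
    print(f"alpha={a:.2f}  PV in [{lo:.4f}, {hi:.4f}]")
```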
9.
Existing risk capital allocation methods, such as the Euler rule, work under the explicit assumption that portfolios are formed as linear combinations of random loss/profit variables, with the firm being able to choose the portfolio weights. This assumption is unrealistic in an insurance context, where arbitrary scaling of risks is generally not possible. Here, we model risks as being partially generated by Lévy processes, capturing the non-linear aggregation of risk. The model leads to non-homogeneous fuzzy games, for which the Euler rule is not applicable. For such games, we seek capital allocations that are in the core, that is, do not provide incentives for splitting portfolios. We show that the Euler rule of an auxiliary linearised fuzzy game (non-uniquely) satisfies the core property and, thus, provides a plausible and easily implemented capital allocation. In contrast, the Aumann–Shapley allocation does not generally belong to the core. For the non-homogeneous fuzzy games studied, Tasche's (1999) criterion of suitability for performance measurement is adapted and it is shown that the proposed allocation method gives appropriate signals for improving the portfolio underwriting profit.
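For context (standard in the risk-capital literature, not quoted from the paper): if ρ is a positively homogeneous risk measure and the portfolio is the linear combination X(λ) = Σ_j λ_j X_j, the Euler rule allocates to risk i the amount

\[
K_i=\left.\frac{\partial}{\partial\lambda_i}\,\rho\!\left(\sum\nolimits_j \lambda_j X_j\right)\right|_{\lambda=\mathbf 1},
\qquad\text{with}\qquad \sum_i K_i=\rho\!\left(\sum\nolimits_i X_i\right)
\]

by Euler's homogeneous function theorem. It is precisely this homogeneity in λ that the Lévy-driven, non-linear aggregation of the paper removes, which is why an auxiliary linearised fuzzy game is introduced before the Euler rule can be applied.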
10.
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this article, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study, we demonstrate our improvements to numerical stability and computational complexity.
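For orientation, a minimal Python sketch of the simplest special case of the quantity discussed above: the expected SFS for a single panmictic population of constant size under the standard neutral coalescent, E[ξ_i] = θ/i. This is only a toy baseline, not the multi-population computation implemented in momi, and the function name is illustrative.

```python
import numpy as np

def expected_sfs_constant_size(n, theta=1.0):
    """Expected site frequency spectrum E[xi_i] = theta / i for i = 1..n-1,
    for one panmictic, constant-size population under the standard neutral
    coalescent (a textbook baseline, not momi's general computation)."""
    i = np.arange(1, n)
    return theta / i

sfs = expected_sfs_constant_size(n=10, theta=2.0)
print(sfs / sfs.sum())   # normalized spectrum, proportional to 1/i
```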